
Portrait Mode on iPhone SE relies only on machine learning

The iPhone SE can create depth maps from a single 2D image using machine learning. Credit: Halide

Apple's new iPhone SE is the company's first, and thus far only, iPhone to rely solely on machine learning for Portrait Mode depth estimation.

The iPhone SE, released in April, appears to be largely a copy of the iPhone 8, right down to its single-lens rear camera. But, under the hood, there's much more going on for depth estimation than in any iPhone before it.

According to a blog post from the makers of camera app Halide, the iPhone SE is the first in Apple's lineup to use "Single Image Monocular Depth Estimation." That means it's the first iPhone that can create a portrait blur effect using just a single 2D image.

On past iPhones, Portrait Mode has required at least two cameras. That's because the best source of depth information has long been comparing two images captured from slightly different positions. Once the system compares those images, it can separate the subject of a photo from the background, allowing for the blurred-background or "bokeh" effect.
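The two-camera approach is essentially triangulation: a point visible in both images lands in slightly different spots in each one, and that shift, or disparity, shrinks as the subject gets farther away. Here is a minimal sketch of the math, with made-up focal length and baseline values standing in for a real camera's calibration data:

```swift
import Foundation

/// Classic stereo triangulation: depth falls off as disparity grows.
/// `focalLengthPx` and `baselineMeters` are illustrative stand-ins for the
/// calibration data a real dual-camera system would supply.
func depthFromDisparity(disparityPx: Double,
                        focalLengthPx: Double = 2_800,
                        baselineMeters: Double = 0.012) -> Double {
    // A feature that shifts a lot between the two views is close;
    // one that barely shifts is far away.
    return focalLengthPx * baselineMeters / disparityPx
}

// Example: a feature that shifts 28 pixels between the two lenses works out
// to roughly 1.2 meters away under these assumed numbers.
print(String(format: "%.2f m", depthFromDisparity(disparityPx: 28)))
```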

The iPhone XR changed that, introducing Portrait Mode support through the use of sensor "focus pixels," which could produce a rough depth map. But while the new iPhone SE has focus pixels, its older hardware lacks the coverage required for depth mapping.

"The new iPhone SE can't use focus pixels, because its older sensor doesn't have enough coverage," Halide's Ben Sandofsky wrote. An iFixit teardown revealed on Monday that the iPhone SE's camera sensor is basically interchangeable with the iPhone 8's.

Instead, the entry-level iPhone produces depth maps entirely through machine learning. That also means that it can produce Portrait Mode photos from both its front- and rear-facing cameras. That's something undoubtedly made possible by the top-of-the-line A13 Bionic chipset underneath its hood.
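Apple hasn't published details of the network the iPhone SE runs, but the general recipe is well understood: feed a single RGB frame through a depth-estimation model and read back a per-pixel depth map. A hedged sketch of how a third-party app might attempt the same thing on-device with Core ML and Vision, where `monoDepthModel` is a hypothetical model rather than Apple's own:

```swift
import Vision
import CoreML

// Hypothetical sketch of single-image depth estimation on-device.
// `monoDepthModel` stands in for any Core ML model that maps one RGB image
// to a depth/disparity map; Apple has not published the model the SE uses.
func estimateDepth(from image: CGImage,
                   using monoDepthModel: MLModel,
                   completion: @escaping (CVPixelBuffer?) -> Void) {
    guard let visionModel = try? VNCoreMLModel(for: monoDepthModel) else {
        completion(nil)
        return
    }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Image-to-image Core ML models surface their output as a pixel
        // buffer observation; here that buffer is the predicted depth map.
        let depthMap = (request.results?.first as? VNPixelBufferObservation)?.pixelBuffer
        completion(depthMap)
    }
    request.imageCropAndScaleOption = .scaleFill

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```

Running a network like this over a full-resolution capture is presumably where the A13's Neural Engine earns its keep; code along these lines would be the same on any recent iPhone, but the inference time would not be.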

The depth information isn't perfect, Halide points out, but it's an impressive feat given the relative hardware limitations of a three-year-old, single-sensor camera setup. Similarly, Portrait Mode on the iPhone SE only works on people, but Halide says the new version of its app allows bokeh effects on non-human subjects on the iPhone SE.
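However the depth map is generated, it ends up saved into the photo as auxiliary data, which is what lets an app like Halide layer its own effects on top. Below is a rough sketch, not Halide's actual pipeline, of reading the embedded disparity map and using it to drive a Core Image variable blur; the file URL is just an example.

```swift
import AVFoundation
import CoreImage
import ImageIO

// Rough sketch, not Halide's actual pipeline: pull the disparity map embedded
// in a portrait photo and use it to drive a masked variable blur.
func fakeBokeh(forPhotoAt url: URL, radius: Double = 12) -> CIImage? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let auxInfo = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
              source, 0, kCGImageAuxiliaryDataTypeDisparity) as? [AnyHashable: Any],
          let depthData = try? AVDepthData(fromDictionaryRepresentation: auxInfo),
          let photo = CIImage(contentsOf: url)
    else { return nil } // no disparity data embedded in this image

    // CIMaskedVariableBlur blurs where the mask is bright. Disparity is high
    // for near pixels, so a real pipeline would invert and normalize it first
    // so the background, not the subject, receives the blur.
    let mask = CIImage(cvPixelBuffer: depthData.depthDataMap)

    let blur = CIFilter(name: "CIMaskedVariableBlur", parameters: [
        kCIInputImageKey: photo,
        "inputMask": mask,
        kCIInputRadiusKey: radius
    ])
    return blur?.outputImage
}
```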



6 Comments

KITA 6 Years · 402 comments

Instead, the entry-level iPhone produces depth maps entirely through machine learning. That also means that it can produce Portrait Mode photos from both its front- and rear-facing cameras. That's something undoubtedly made possible by the top-of-the-line A13 Bionic chipset underneath its hood.

Is this any different from Google's single-lens portrait mode (no split pixels either), which is able to run even on mid-range Snapdragon SoCs from years ago? I don't see why you would buff the A13 Bionic otherwise; one would presume older iPhones, such as the iPhone 8, should be able to process this.

MplsP 8 Years · 4050 comments

KITA said:
Instead, the entry-level iPhone produces depth maps entirely through machine learning. That also means that it can produce Portrait Mode photos from both its front- and rear-facing cameras. That's something undoubtedly made possible by the top-of-the-line A13 Bionic chipset underneath its hood.
Is this any different from Google's single-lens portrait mode (no split pixels either), which is able to run even on mid-range Snapdragon SoCs from years ago? I don't see why you would buff the A13 Bionic otherwise; one would presume older iPhones, such as the iPhone 8, should be able to process this.

Agreed. There's nothing wrong with the A13, but this is image processing that can occur after the photo is taken; it doesn't need to occur in real time and thus shouldn't need a top of the line processor.

Still, it's cool that they added the feature; since it's software based, I'm hoping they can add it to other older phones as well.

Xed 4 Years · 2896 comments

MplsP said:
Agreed. There's nothing wrong with the A13, but this is image processing that can occur after the photo is taken; it doesn't need to occur in real time and thus shouldn't need a top of the line processor.

Still, it's cool that they added the feature; since it's software based, I'm hoping they can add it to other older phones as well.

I have yet to test the feature on the new SE but I assumed that it does occur in real time so you can see how it will look before you take the shot.

gatorguy 13 Years · 24636 comments

Xed said:
I have yet to test the feature on the new SE but I assumed that it does occur in real time so you can see how it will look before you take the shot.

I believe the Pixels have done so in realtime since 2017, and it's been open-sourced for some time now for other smartphones. Of course the SE may accomplish it differently and the results could be even better. I've not seen any reviews on the feature TBH. 

EsquireCats 8 Years · 1268 comments

The core difference between this and Google's implementation is that the iPhone doesn't rely on an HDR+ shot or the subtle variation in the background across different areas of the same lens. What this means is that, with an iPhone SE, you can take a photo of an old printed photograph and still get the portrait effect; that's where the A13 becomes useful in keeping this process usably fast and power-efficient.

That said, all smartphone portrait modes have mixed results: sometimes they're good, sometimes they suck, and there are some things that can't be properly simulated (e.g. when photographing objects that have lensed the background, or a curved reflection).